Supported with v3.0.0+

There are two AI Controllers attached to the Gameboard prefab game object. The OpenAI Controller uses the OpenAI API and exposes the ability to request AI-generated text. The Stability AI Controller uses the Stability AI API and exposes the ability to request AI-generated images.

Both of these AI Controllers require your own OpenAI or Stability AI account to generate content from their APIs.

The Gameboard prefab is always available in your game as long as the SDK has been integrated. To access it from our custom MonoBehaviour, we just need to find it by its tag.

    GameObject gameboardObject = GameObject.FindWithTag("Gameboard");

With access to our Gameboard object, we can then obtain a reference to the AI controllers.

We will first need to define a reference to each controller to make it available throughout our script.

    OpenAIController openAIController;
    StabilityAIController stabilityAIController;

We then assign the controllers by getting them from the gameboardObject within the Start method.

This is also where you would want to specify the credentials for OpenAI or Stability AI.

void Start()
{
    openAIController = gameboardObject.GetComponent<OpenAIController>();
    openAIController.SetOpenAICredentials("{your-open-ai-api-key}", "{your-org-id}");

    stabilityAIController = gameboardObject.GetComponent<StabilityAIController>();
    stabilityAIController.SetStabilityAICredentials("{your-stability-ai-api-key}");
}

If you have an OpenAI account and are not sure what your API key or Organization ID is, check the OpenAI Documentation for guidance.

If you have a Stability AI account and are not sure what your API key is, check the Stability AI documentation for guidance.

WARNING: Make sure to follow best practices for API key safety.

Text Generation

OpenAI Controller implements OpenAI's chat capabilities, allowing you to generate text responses from prompts. This works by sending a conversation to OpenAI, which sends back a text response based on the whole conversation.

With that in mind, there are currently two methods you can use to generate text from the OpenAI Controller: SendOpenAIChatNextMessage and SendOpenAIChatConversation.

Both of these methods return void, and to receive the response you need to subscribe to the OnChatResponseReceived event. This event is triggered once the AI generates and returns a text response.

You can subscribe to the event in the Start method.

void Start()
{
    ...

    openAIController.OnChatResponseReceived += ReceivedChat;
}

public void ReceivedChat(string response)
{
    // Handle response text
}

Now you have everything set up to request messages from OpenAI.

You can send individual messages, which will be appended to and tracked in currentMessage on the controller.

public void SendChatRequest()
{
    openAIController.SendOpenAIChatNextMessage("Say this is a test!"); // sends next message sent from user by default

    //OR

    openAIController.SendOpenAIChatNextMessage("I will be your assistant.", MessageRoleEnum.ASSISTANT); //specifying message role
}

If you want to start a fresh conversation while using SendChatRequest, you can clear all the current messages by calling ClearMessages.
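For example, resetting the conversation before a new exchange might look like this (a sketch; ClearMessages is named above, but the no-argument call and the StartNewConversation wrapper are assumptions):

```csharp
public void StartNewConversation()
{
    // Drop the tracked conversation history so the next prompt starts fresh
    openAIController.ClearMessages();

    openAIController.SendOpenAIChatNextMessage("Let's start over. Say hello!");
}
```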

Or you can send a whole conversation instead of sending per message.

NOTE: This overwrites the currentMessage list.

public void SendConversationRequest()
{
    openAIController.SendOpenAIChatConversation(new List<OpenAIMessage>() {
        new OpenAIMessage() {role = MessageRoleEnum.SYSTEM, content = "You are a helpful assistant."},
        new OpenAIMessage() {role = MessageRoleEnum.USER, content = "Hello!"}
    });
}

Image Generation (v3.0.1+)


OpenAI Controller implements OpenAI's image generation capabilities to allow you to generate image responses from prompts. This works by sending a prompt to OpenAI, and it will send back an image response based on the prompt.

You can use the SendOpenAIImageRequest method to get image responses. To receive the image response, you need to subscribe to the OnImageResponseReceived event. This event is triggered once the AI generates and returns an image response.

openAIController.SendOpenAIImageRequest("A cute baby sea otter");
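As with chat, you can subscribe in the Start method. A minimal sketch, assuming the OpenAI image handler uses the same Texture2D signature as the Stability AI handler shown later (ReceivedOpenAIImage is a hypothetical handler name):

```csharp
void Start()
{
    // Subscribe so the handler runs when OpenAI returns a generated image
    openAIController.OnImageResponseReceived += ReceivedOpenAIImage;
}

public void ReceivedOpenAIImage(Texture2D response)
{
    // Handle the generated image, e.g. assign it to a material or UI RawImage
}
```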

This method also takes optional width and height parameters, along with optional imagesGenerated and style parameters.

The width and height properties can only be the following:

The imagesGenerated parameter only supports the default value of 1 for the dall-e-3 model.

The style parameter can currently be either Vivid or Natural; it is Vivid by default. From the OpenAI documentation:

Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.

Stability AI Controller implements Stability AI's image creation capabilities to allow you to generate images from prompts.

All of these methods return void; to receive the image response, you need to subscribe to the OnImageResponseReceived event. This event is triggered once the AI generates and returns the image response.

void Start()
{
    ...

    stabilityAIController.OnImageResponseReceived += ReceivedImages;
}

public void ReceivedImages(Texture2D response)
{
    // Handle response image
}

SendTextToImageRequest

With the SendTextToImageRequest methods, you can send text to generate an image from. There are two methods you can call depending on how detailed your description is.

The first method just takes a string to generate an image from.

stabilityAIController.SendTextToImageRequest("A lighthouse on a cliff");

The second method takes a list of TextPrompt objects where you can specify the description and how much weight that description should have in the generation of the image.

stabilityAIController.SendTextToImageRequest(new List<TextPrompt>() {
    new TextPrompt() {weight = 0.5f, text = "Dog"},
    new TextPrompt() {weight = 0.5f, text = "Cat"}
});

Both methods have optional parameters for width/height (which must be in increments of 64), and style.

stabilityAIController.SendTextToImageRequest("A lighthouse on a cliff", 512, 512, StyleEnum.ANIME);

stabilityAIController.SendTextToImageRequest(new List<TextPrompt>() {
    new TextPrompt() {weight = 0.2f, text = "Dog"},
    new TextPrompt() {weight = 0.8f, text = "Cat"}
}, 512, 512, StyleEnum.COMIC_BOOK);

SendImageToImageRequest

With the SendImageToImageRequest methods, you can send an image with a prompt to generate a new image based on the specified image. There are two methods you can call depending on how detailed your description is.

The image provided must have a height and width in increments of 64.
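A simple way to guard against invalid dimensions before sending is to check them first (a sketch using standard Unity Texture2D properties; HasValidDimensions is a hypothetical helper name):

```csharp
// Returns true if the texture's dimensions are valid for an image-to-image request,
// i.e. both width and height are multiples of 64
bool HasValidDimensions(Texture2D texture)
{
    return texture.width % 64 == 0 && texture.height % 64 == 0;
}
```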

The first method takes a single image as a Texture2D object and a text description of what you want the generated image to be.

//Assume sendImage is a Texture2D object with the image you want to send
stabilityAIController.SendImageToImageRequest(sendImage, "A lighthouse on a cliff");

The second method takes a single image as a Texture2D object and a list of TextPrompt objects where you can specify the description and how much weight that description should have in the generation of the image.

stabilityAIController.SendImageToImageRequest(sendImage, new List<TextPrompt>() {
    new TextPrompt() {weight = 0.5f, text = "battle scene"},
    new TextPrompt() {weight = 0.5f, text = "fairies dancing"}
});

Although we are working in a managed environment, it is always a good idea to clean up event listeners when they are no longer needed.

    void OnDestroy()
    {
        openAIController.OnChatResponseReceived -= ReceivedChat;
        stabilityAIController.OnImageResponseReceived -= ReceivedImages;
    }

This section includes the entire code in one single, easy-to-copy block.

    GameObject gameboardObject;
    OpenAIController openAIController;
    StabilityAIController stabilityAIController;

    void Start()
    {
        gameboardObject = GameObject.FindWithTag("Gameboard");

        openAIController = gameboardObject.GetComponent<OpenAIController>();
        openAIController.SetOpenAICredentials("{your-open-ai-api-key}", "{your-org-id}");

        stabilityAIController = gameboardObject.GetComponent<StabilityAIController>();
        stabilityAIController.SetStabilityAICredentials("{your-stability-ai-api-key}");

        //Events
        openAIController.OnChatResponseReceived += ReceivedChat;
        stabilityAIController.OnImageResponseReceived += ReceivedImages;
    }

    public void ReceivedChat(string response)
    {
        // Handle response text
    }

    public void ReceivedImages(Texture2D response)
    {
        // Handle response image
    }

    public void SendChatRequest()
    {
        openAIController.SendOpenAIChatNextMessage("Say this is a test!"); // sends next message sent from user by default

        openAIController.SendOpenAIChatNextMessage("I will be your assistant.", MessageRoleEnum.ASSISTANT); //specifying message role
    }

    public void SendConversationRequest()
    {
        //Specify the whole chat stream. This will clear out the current messages sent from SendOpenAIChatNextMessage and replace them with the specified list of messages
        openAIController.SendOpenAIChatConversation(new List<OpenAIMessage>() {
            new OpenAIMessage() {role = MessageRoleEnum.SYSTEM, content = "You are a helpful assistant."},
            new OpenAIMessage() {role = MessageRoleEnum.USER, content = "Hello!"}
        });
    }

    public void SendTextToImageRequest()
    {
        stabilityAIController.SendTextToImageRequest("A lighthouse on a cliff"); //simple request
        stabilityAIController.SendTextToImageRequest("A lighthouse on a cliff", 512, 512, StyleEnum.ANIME); //extra params

        //Multiple text prompts with weight of prompt
        stabilityAIController.SendTextToImageRequest(new List<TextPrompt>() {
            new TextPrompt() {weight = 0.5f, text = "Dog"},
            new TextPrompt() {weight = 0.5f, text = "Cat"}
        });

        //Extra params
        stabilityAIController.SendTextToImageRequest(new List<TextPrompt>() {
            new TextPrompt() {weight = 0.2f, text = "Dog"},
            new TextPrompt() {weight = 0.8f, text = "Cat"}
        }, 512, 512, StyleEnum.COMIC_BOOK);
    }

    public void SendImageToImageRequest()
    {
        //Assume sendImage is a Texture2D object with the image you want to send
        stabilityAIController.SendImageToImageRequest(sendImage, "A lighthouse on a cliff"); //simple request

        //Multiple text prompts with weight of prompt
        stabilityAIController.SendImageToImageRequest(sendImage, new List<TextPrompt>() {
            new TextPrompt() {weight = 0.5f, text = "battle scene"},
            new TextPrompt() {weight = 0.5f, text = "fairies dancing"}
        });
    }

    void OnDestroy()
    {
        openAIController.OnChatResponseReceived -= ReceivedChat;
        stabilityAIController.OnImageResponseReceived -= ReceivedImages;
    }